Data from 59 participants were usable (63 finished the experiment in total, but 4 had roughly 0% correct on naming trials). Note that the participant IDs don’t quite match up to the total number of participants; the mismatch is because the RAs accidentally skipped a couple of numbers. Participants were recruited during the summer and through SONA. We had been aiming for 60.

120 items per participant, encountered in lists of 12 items. With three conditions (binocular, CFS, not-studied), there were four items in each condition in each list. Of those four items, three were encountered in a “go” test trial and one in a “no-go” test trial, so 1/4 of test trials were no-go. This leaves up to 30 items in go trials per condition, per participant (the actual number will differ, depending on how many objects were correctly named).
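As a sanity check on the design described above, the counts can be derived with a little arithmetic (a minimal sketch; the variable names are illustrative, not taken from the experiment code):

```r
# Design arithmetic implied by the description above (illustrative only)
n_items_total <- 120
items_per_list <- 12
n_lists <- n_items_total / items_per_list              # 10 lists

conditions <- c("binocular", "cfs", "not-studied")
items_per_cond_per_list <- items_per_list / length(conditions)  # 4

go_per_cond_per_list <- 3                              # 3 of the 4 items are "go"
prop_nogo <- 1 / items_per_cond_per_list               # 1/4 of test trials are no-go

max_go_per_cond <- n_lists * go_per_cond_per_list      # up to 30 go trials
```

Running this gives `n_lists = 10`, `prop_nogo = 0.25`, and `max_go_per_cond = 30`, matching the counts stated above.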

Figures

Naming Accuracy

The following plots show the proportion of items that participants named correctly. The first plot shows each participant individually, to identify those who didn’t try during the naming condition. Those participants were excluded from the group-level plot and all further plots.

Note that in this group-level plot (and all that follow), each participant’s average performance is calculated separately and then those averages are averaged together. Error bars reflect the standard error of the mean across participant averages (note that I had previously been showing 95% CIs).
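The averaging scheme just described (participant averages first, then mean and SEM across those averages) can be sketched as follows. The data frame here is simulated purely for illustration; the real data and column names may differ:

```r
library(dplyr)

# Simulated trial-level data: 6 participants x 20 naming trials (illustration only)
set.seed(1)
dat <- expand.grid(subject = 1:6, trial = 1:20)
dat$correct <- rbinom(nrow(dat), 1, 0.8)

# Step 1: average within each participant.
# Step 2: mean and SEM across the participant averages.
group_summary <- dat %>%
  group_by(subject) %>%
  summarise(p_correct = mean(correct), .groups = "drop") %>%
  summarise(mean_acc = mean(p_correct),
            sem      = sd(p_correct) / sqrt(n()))
```

This is the "SEM of participant means" convention, as opposed to pooling all trials and computing a trial-level SEM.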

PAS

Here are the PAS ratings provided on the first and second encounters of items in the study phase. Only items studied in the CFS condition are shown, because these are the only trials on which participants were asked to provide a PAS rating.

Most participants gave ratings of 2 or 3 on most CFS trials. A few were better at stopping the trial before being able to identify the object (they gave mostly PAS 2 ratings), but I wouldn’t rule out that those participants were simply pressing 2 because they thought that was the ‘correct’ response.

Go/No-go accuracy

The following plot shows proportion correct on the go/no-go trials (whether participants correctly responded that an item was appearing, or correctly waited). Raw accuracy is high. I didn’t calculate anything like d’ because performance was basically at ceiling for all participants, so d’ would be infinite.

Percentiles

Percentiles are first calculated within participants, then averaged across participants.

Note that because these lines are conditioned on accurately naming the cue item, there are different numbers of trials in each condition.
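The within-participant-then-average percentile procedure (sometimes called Vincentizing) can be sketched like this. The RT data here are simulated for illustration, and the column names are assumptions:

```r
library(dplyr)

# Simulated RTs: 6 participants x 50 trials (illustration only)
set.seed(1)
rt_dat <- data.frame(subject = rep(1:6, each = 50),
                     RT = rlnorm(300, meanlog = 6.5, sdlog = 0.3))

probs <- c(.25, .50, .75, 1.00)  # the 4 percentiles used in the plots above

rt_quant <- rt_dat %>%
  group_by(subject) %>%
  reframe(Percentile = probs,
          RT = quantile(RT, probs)) %>%      # quantiles within each participant
  group_by(Percentile) %>%
  summarise(mean_RT = mean(RT), .groups = "drop")  # then averaged across participants
```

In the real analysis the quantiles would also be computed separately within each condition (and naming-accuracy level) before averaging.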

There is one thing I’m somewhat worried about. In the previous experiment (topic of current CFS paper), we showed that CFS reveals dissociable learning processes (lateral vs. vertical connections). Here, we’re trying to show that a visual recollection-like process can occur. In particular, we’re trying to demonstrate that, when given just one part of an object, participants pattern-complete other parts – that they not only have lateral links between parts of objects but that they are actually able to generate the parts on the nodes of those lateral links. But, how important is it for this experiment that participants have access to a qualitatively different kind of information when the objects were studied under CFS as compared to when they were studied binocularly?

I had initially viewed the binocular trials as a kind of positive control: if RT didn’t increase when participants were fully aware at study, there would be little hope of RT increasing for objects studied under CFS. But what in these data shows that the objects studied under CFS are different from the objects studied binocularly? To what extent are we sure that the lateral connections we dissociated in Experiment 1 are the same ones driving the RT speed-up in the CFS condition here? Put another way, given that there is an RT speed-up for objects that weren’t named but were studied binocularly, does the CFS condition buy us anything other than more trials on which participants were unlikely to have been able to name the objects?

These CDF plots are shown for each participant.

These next plots contain similar information to the ones above, but now conditions are shown as differences from Not-Studied.

Note that in these participant-level plots, it’s a little easier to see that this grouping of naming accuracy x condition – combined with excluding trials on which participants were incorrect in the go/no-go task – leaves some participants without any data in some conditions. For example, participant 5 didn’t name any of the objects in the Not-Studied condition, so participant 5’s graph doesn’t have any ‘cue_correct_fct’ difference lines (there were no Not-Studied items to use as a baseline).

The above plots have just 4 percentiles, because that was how many were used in a paper Dave sent me. Previously, I had been grouping the data into 5 percentiles. Below are the same plots with 5 percentiles.

## Fitting one lmer() model. [DONE]
## Calculating p-values. [DONE]

For the actual stats, I ran a linear mixed-effects model to test for 1) an interaction between study condition x percentile when the object isn’t named, 2) a post-hoc contrast for an effect of CFS when the object isn’t named, and 3) an effect of CFS when the object isn’t named, at the first quantile.

These came out as we wanted. There was no condition x percentile interaction, but there was an effect of study under CFS (collapsing across percentiles), including an effect of study at the first percentile.

## Mixed Model Anova Table (Type 2 tests, KR-method)
## 
## Model: RT ~ Condition * Percentile * cue_correct_fct + (1 | subject)
## Data: rt_quant0
##                                 Effect        df         F p.value
## 1                            Condition 1, 862.57   7.35 **    .007
## 2                           Percentile 3, 862.12 17.76 ***  <.0001
## 3                      cue_correct_fct 1, 885.77    4.56 *     .03
## 4                 Condition:Percentile 3, 862.12      0.44     .73
## 5            Condition:cue_correct_fct 1, 862.64      0.04     .84
## 6           Percentile:cue_correct_fct 3, 862.12 11.20 ***  <.0001
## 7 Condition:Percentile:cue_correct_fct 3, 862.12      0.50     .68
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '+' 0.1 ' ' 1
## cue_correct_fct = 0:
##  Condition     lsmean         SE     df   lower.CL  upper.CL t.ratio
##  Binocular 0.21440089 0.03554553 122.00 0.16413192 0.2646699   6.032
##  CFS       0.14574874 0.03554553 122.00 0.09547977 0.1960177   4.100
##  p.value
##   <.0001
##   0.0004
## 
## cue_correct_fct = 1:
##  Condition     lsmean         SE     df   lower.CL  upper.CL t.ratio
##  Binocular 0.15763001 0.03697874 139.22 0.10533417 0.2099258   4.263
##  CFS       0.09865568 0.03716827 141.74 0.04609181 0.1512196   2.654
##  p.value
##   0.0002
##   0.0321
## 
## Results are averaged over the levels of: Percentile 
## Degrees-of-freedom method: satterthwaite 
## Confidence level used: 0.95 
## Conf-level adjustment: scheffe method with dimensionality 2 
## P value adjustment: scheffe method with dimensionality 2
## Percentile = 0.25:
##  Condition cue_correct_fct    lsmean         SE     df   lower.CL
##  Binocular 0               0.2702125 0.05318260 464.05 0.16384729
##  CFS       0               0.1795816 0.05318260 464.05 0.07321641
##  Binocular 1               0.3093042 0.05621347 523.92 0.19687725
##  CFS       1               0.2286941 0.05663217 532.29 0.11542972
##   upper.CL t.ratio p.value
##  0.3765777   5.081  <.0001
##  0.2859468   3.377  0.0235
##  0.4217311   5.502  <.0001
##  0.3419584   4.038  0.0029
## 
## Degrees-of-freedom method: satterthwaite 
## Confidence level used: 0.95 
## Conf-level adjustment: scheffe method with dimensionality 4 
## P value adjustment: scheffe method with dimensionality 4

Extra details about the mixed model

Here are a couple of details about how I specified the mixed-effects model. I fit the model with the R package lme4 (I think we had a lab meeting dedicated to fitting mixed-effects models with this package a while ago). I assumed fixed effects for Condition, Percentile, and naming accuracy; Percentile was treated as a categorical (not continuous) variable. The model included interactions among all of these terms. Additionally, I included a random intercept for participants, but no random slopes (i.e., no interaction between participant and the other predictors). Finally, following Kevin’s advice, I fit the model to differences of quantiles, rather than to the quantiles directly.
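Putting the specification together, the model and the follow-up contrasts shown in the output above would look roughly like the following. This is a sketch, not the actual analysis script: the data frame `rt_quant0` and the column names are taken from the output above, and the `afex`/`emmeans` calls match the "Type 2 tests, KR-method" header and the Scheffé-adjusted contrasts, but the exact arguments in the real script may have differed:

```r
library(afex)     # fits via lme4 and produces the ANOVA table shown above
library(emmeans)  # post-hoc contrasts

# Fixed effects with all interactions; categorical Percentile;
# random intercept for subject, no random slopes.
m <- mixed(
  RT ~ Condition * Percentile * cue_correct_fct + (1 | subject),
  data   = rt_quant0,   # quantile differences relative to Not-Studied
  type   = 2,           # "Type 2 tests" per the table header
  method = "KR"         # Kenward-Roger, per the table header
)

# Effect of study condition within each naming-accuracy level,
# averaged over percentiles (Scheffé adjustment, as in the output)
emm <- emmeans(m, ~ Condition | cue_correct_fct)
summary(emm, adjust = "scheffe", infer = TRUE)
```

Note that the emmeans output above reports Satterthwaite degrees of freedom, so the real script presumably requested that df method for the contrasts even though the ANOVA table used Kenward-Roger.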